10 research outputs found

    Matching Local Invariant Features with Contextual Information: An Experimental Evaluation

    The main advantage of local invariant features is their local character, which yields robustness to occlusion and varying background. Local features have therefore proved to be a powerful tool for finding correspondences between images and have been employed in many applications. However, their local character limits the descriptive capability of feature descriptors, and local features fail to resolve ambiguities that occur when an image shows multiple similar regions. Considering some global information clearly helps to achieve better performance; the question is which information to use and how to use it. Context can be used to enrich the description of the features, or in the matching step to filter out mismatches. In this paper, we compare recent methods that use context for matching and show that better results are obtained when contextual information is used during the matching process. We evaluate the methods in two applications, wide-baseline matching and object recognition, and a relaxation-based approach gives the best results.
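    For context, below is a minimal sketch of the purely local matching baseline that these contextual methods build on: SIFT descriptors matched by nearest neighbour with Lowe's ratio test. The filenames are placeholders and OpenCV >= 4.4 is assumed (SIFT lives in the main module from that version); on images with repeated structures, many ambiguous matches survive this filter, which is exactly the failure case discussed above.

```python
import cv2

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder filenames
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

# Detect local invariant features and compute their descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Purely local matching: nearest neighbour in descriptor space,
# filtered by Lowe's ratio test. No contextual information is used,
# so repeated structures yield ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]
print(f"{len(matches)} putative correspondences")
```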

    Matching Local Invariant Features: How Can Contextual Information Help?

    Local invariant features are a powerful tool for finding correspondences between images, since they are robust to cluttered background, occlusion and viewpoint changes. However, they suffer from the lack of global information and fail to resolve ambiguities that occur when an image has multiple similar regions. Considering some global information clearly helps to achieve better performance; the question is which information to use and how to use it. While previous approaches use context for description, this paper shows that better results are obtained if contextual information is included in the matching process. We compare two methods that use context for matching, and experiments show that a relaxation-based approach gives better results.

    A simple and efficient eye detection method in color images

    In this paper we propose a simple and efficient eye detection method for face detection tasks in color images. The algorithm first detects face regions in the image using a skin color model in the normalized RGB color space. Eye candidates are then extracted within these regions. Finally, using anthropological characteristics of human eyes, the pairs of eye regions are selected. The proposed method is simple and fast, since it needs no template matching step for face verification, and it is robust because it can deal with face rotation. Experimental results show the validity of our approach: a correct eye detection rate of 98.4% is achieved on a subset of the AR face database.
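    As an illustration of the first stage, here is a rough sketch of skin detection in the normalized RGB space. The chromaticity thresholds and the morphological clean-up are assumptions for demonstration only, not the paper's actual skin color model.

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg").astype(np.float32)  # OpenCV loads in BGR order
b, g, r = cv2.split(img)
s = r + g + b + 1e-6           # avoid division by zero
rn, gn = r / s, g / s          # normalized chromaticities (intensity-free)

# Hypothetical skin cluster in (rn, gn) space; thresholds are illustrative.
skin = (rn > 0.36) & (rn < 0.47) & (gn > 0.28) & (gn < 0.35)
mask = skin.astype(np.uint8) * 255

# Morphological opening removes small non-skin speckle.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
cv2.imwrite("skin_mask.png", mask)
```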

    Fast and Robust Image Matching using Contextual Information and Relaxation

    This paper tackles the difficult problem of image matching under projective transformation. Recently, several algorithms capable of handling large changes of viewpoint as well as large changes of scale have been proposed. They are based on the comparison of local, invariant descriptors which are robust to these transformations. However, since no image descriptor is robust enough to avoid mismatches, an additional step of outlier rejection is often needed, whose accuracy strongly depends on the number of mismatches. In this paper, we show that the matching process itself can be made robust enough to leave very few mismatches, using a relaxation labeling technique. The main contribution of this work is an efficient and fast implementation of a relaxation method that can deal with large sets of features. Furthermore, we show how contextual information can be obtained and used in this robust and fast algorithm. Experiments with real data and comparison with other matching methods clearly show the improvement in the matching results.
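    To make the idea concrete, the sketch below shows one simple form of relaxation labeling over putative matches: each match carries a confidence that is iteratively reinforced by geometrically compatible matches and decays otherwise. The compatibility rule (preservation of pairwise distances, which only suits near-rigid scenes) and the update schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def refine_matches(pts1, pts2, iters=20, tol=0.1):
    """pts1, pts2: (N, 2) arrays where pts1[i] putatively matches pts2[i].
    Returns a boolean mask of matches retained after relaxation."""
    n = len(pts1)
    # Pairwise distances within each image.
    d1 = np.linalg.norm(pts1[:, None] - pts1[None, :], axis=-1)
    d2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)
    # Two matches are compatible if they preserve relative distances.
    compat = (np.abs(d1 - d2) / (np.maximum(d1, d2) + 1e-6) < tol).astype(float)
    np.fill_diagonal(compat, 0.0)
    p = np.full(n, 0.5)                        # initial confidences
    for _ in range(iters):
        support = compat @ p / max(n - 1, 1)   # average support from peers
        p = p * (1.0 + support)                # reinforce supported matches
        p /= p.max() + 1e-12                   # renormalize to [0, 1]
    return p > 0.5
```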

    Prediction of Postoperative Visual Acuity in Rhegmatogenous Retinal Detachment Using OCT Images

    Deep Learning (DL) methods, such as Convolutional Neural Networks (CNNs), have shown great potential in diagnosing complex diseases. Among these diseases, Rhegmatogenous Retinal Detachment (RRD) stands out as a critical condition necessitating precise diagnosis and postoperative Visual Acuity (VA) prediction. This research introduces a DL-based Computer-Aided Diagnosis (CAD) system that utilizes Optical Coherence Tomography (OCT) images for both the diagnosis of RRD and the prediction of postoperative VA. The CAD system relies on a diverse dataset, including OCT images of patients with RRD from the Hedi Raies Ophthalmology Institute of Tunis and a large public dataset of OCT images of normal subjects. Preprocessing steps, such as image cropping, enhancement, denoising, and resizing, are applied to the tomographic images. Data oversampling and augmentation techniques address class imbalance and enlarge the dataset by generating additional samples. Various DL models, including pre-trained CNNs (VGG-16, Inception-V3, Inception-ResNet-V2), bilinear CNNs (BCNN(VGG-16)² and BCNN(Inception-V3)²), and a custom CNN architecture, are implemented for RRD diagnosis and postoperative VA prediction. The experimental outcomes demonstrate the effectiveness of the proposed CAD system: it achieves an accuracy of 99.87% for diagnosing RRD and of 98.06% for predicting postoperative VA with the BCNN(VGG-16)² model. The developed CAD system represents a significant advancement in RRD diagnosis and postoperative VA prediction. By combining DL and OCT imaging, it provides automated and accurate diagnosis, showing potential to improve patient care and treatment decisions.
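    For readers unfamiliar with the bilinear models named above, the following PyTorch sketch shows the core of a BCNN(VGG-16)² head: the outer product of the conv features with themselves, pooled over spatial locations, followed by signed square-root and L2 normalization. torchvision >= 0.13 is assumed for the weights API, and the two-class head is a placeholder; this is a generic bilinear CNN, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class BilinearVGG(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = vgg16(weights="DEFAULT").features  # conv layers only
        self.fc = nn.Linear(512 * 512, num_classes)

    def forward(self, x):
        f = self.features(x)                  # (B, 512, H, W)
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        # Outer product of the feature map with itself, averaged over locations.
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)   # (B, 512, 512)
        bilinear = bilinear.view(b, -1)
        # Signed square-root and L2 normalization, standard in bilinear CNNs.
        bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-10)
        bilinear = nn.functional.normalize(bilinear)
        return self.fc(bilinear)
```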

    A Coupled Schema of Probabilistic Atlas and Statistical Shape and Appearance Model for 3D Prostate Segmentation in MR Images

    A hybrid framework combining a probabilistic atlas with statistical shape and appearance models (SSAMs) is proposed for 3D prostate segmentation. An initial 3D segmentation of the prostate is obtained by registering the probabilistic atlas to the test dataset with deformable Demons registration. The initial result is used to initialize multiple SSAMs corresponding to the apex, central and base regions of the prostate gland, so as to capture local variability. Multiple mean parametric models of shape and appearance are derived by principal component analysis of prior shape and intensity information of the prostate from the training data. The parameters are then modified with prior knowledge of the optimization space to achieve 2D segmentation. The 2D labels are registered to the 3D labels generated by the probabilistic atlas to constrain the pose variation and generate valid 3D shapes. The proposed method achieves a mean Dice similarity coefficient of 0.89±0.11 and a mean Hausdorff distance of 3.05±2.25 mm when validated on 15 prostate volumes of a public dataset in a leave-one-out framework.
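    As a pointer to how the two reported figures are computed, here is a small sketch of the Dice similarity coefficient and the symmetric Hausdorff distance for binary segmentation volumes. It works in voxel units and ignores anisotropic voxel spacing, which a real evaluation would account for; the function and variable names are assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two boolean volumes."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground voxel
    coordinates of two boolean volumes (in voxel units)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```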